Search Results: "francois"

23 May 2015

Francois Marier: Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests
apt-get install memtest86+ smartmontools e2fsprogs
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine. To check memory, I boot into memtest86+ from the grub menu and let it run overnight. Then I check the hard drives using:
smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration
apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:
  • turn off daily auto-commits
  • turn off auto-commits before package installs
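Both changes amount to uncommenting a variable in that file (names as found in recent etckeeper packages; they may vary between releases):
AVOID_DAILY_AUTOCOMMITS=1
AVOID_COMMIT_BEFORE_INSTALL=1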
To get more control over the various packages I install, I change the default debconf level to medium:
dpkg-reconfigure debconf
Since I use vim for all of my configuration file editing, I make it the default editor:
update-alternatives --config editor

ssh
apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way. I also ensure that the locale I use is available on the server by adding it to the list of generated locales:
dpkg-reconfigure locales
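For the timezone and locale variables to reach the server, the ssh client must also send them; a typical client-side stanza (standard OpenSSH SendEnv syntax, not shown in the original) is:
# ~/.ssh/config on the client
Host *
    SendEnv LANG LC_* TZ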
Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:
KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server. I also create a new group and add the users that need ssh access to it:
addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions by putting this in /root/.bash_profile:
TMOUT=600

Security checks
apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:
chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:
create 644 root adm
I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):
INTRO=0
FQDN=0
Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:
Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd mosh-server ntpd'

General hardening
apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub
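After the next reboot, you can confirm that AppArmor is active using the aa-status tool:
aa-status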

Entropy and timekeeping
apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.
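A minimal way to do both, assuming the machine actually has a TPM:
echo tpm_rng >> /etc/modules   # load at boot
modprobe tpm_rng               # load right away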

Preventing mistakes
apt-get install molly-guard safe-rm sl
The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates
apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest
These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools. In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
to send me a notification whenever a kernel update requires a reboot to take effect.
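One detail implied but not shown: run-parts only executes cron.daily scripts that have the executable bit set, so don't forget:
chmod +x /etc/cron.daily/reboot-required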

Handy utilities
apt-get install renameutils atool iotop sysstat lsof mtr-tiny
Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.
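That change is a single variable in /etc/default/sysstat as shipped in Debian:
ENABLED="true"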

Apache configuration
apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make. I enable these in /etc/apache2/conf.d/security:
<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default. I also create a new /etc/apache2/conf.d/servername which contains:
ServerName machine_hostname

Mail
apt-get install postfix
Configuring mail properly is tricky but the following has worked for me. In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname. Change the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
Set the following aliases in /etc/aliases:
  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails
before running newaliases to update the aliases database. Create a new cronjob (/etc/cron.hourly/checkmail):
#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box. Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.
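Putting the three alias rules together, a minimal /etc/aliases might look like this (the external address is a placeholder):
root: francois
francois: francois@example.com
www-data: root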

Network tuning
To reduce the server's contribution to bufferbloat, I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:
net.core.default_qdisc=fq_codel
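The new discipline can also be applied immediately, without a reboot:
sysctl -w net.core.default_qdisc=fq_codel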

3 April 2015

Francois Marier: Using OpenVPN on Android Lollipop

I use my Linode VPS as a VPN endpoint for my laptop when I'm using untrusted networks and I wanted to do the same on my Android 5 (Lollipop) phone. It turns out that it's quite easy to do (doesn't require rooting your phone) and that it works very well.

Install OpenVPN
Once you have installed and configured OpenVPN on the server, you need to install the OpenVPN app for Android (available both on F-Droid and Google Play). From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:
./build-key nexus6        # "nexus6" as Name, no password
and then copy the following files onto your phone:
  • ca.crt
  • nexus6.crt
  • nexus6.key
  • ta.key

Create a new VPN config
If you configured your server as per my instructions, these are the settings you'll need to use on your phone:
Basic:
  • LZO Compression: YES
  • Type: Certificates
  • CA Certificate: ca.crt
  • Client Certificate: nexus6.crt
  • Client Certificate Key: nexus6.key
Server list:
  • Server address: hafnarfjordur.fmarier.org
  • Port: 1194
  • Protocol: UDP
  • Custom Options: NO
Authentication/Encryption:
  • Expect TLS server certificate: YES
  • Certificate hostname check: YES
  • Remote certificate subject: server
  • Use TLS Authentication: YES
  • TLS Auth File: ta.key
  • TLS Direction: 1
  • Encryption cipher: AES-256-CBC
  • Packet authentication: SHA384 (not SHA-384)
That's it. Everything else should work with the defaults.

25 March 2015

Francois Marier: Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options
In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla. It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss, so this option doesn't work for me. A better option that other people have suggested is to avoid subscribing to the planet feeds, and instead to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high-maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter
PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see. If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/. You can either:
  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.
The software will fetch new posts every hour and overwrite the local copy of each feed. A basic configuration file looks like this:
[feed]
url = http://planet.debian.org/atom.xml
[blacklist]

Filters
There are currently two ways of filtering posts out. The main one is by author name:
[blacklist]
authors =
  Alice Jones
  John Doe
and the other one is by title:
[blacklist]
titles =
  This week in review
  Wednesday meeting for
In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.

Tor support
Since blog updates happen asynchronously in the background, they can work very well over Tor. In order to set that up in the Debian version of planetfilter:
  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:
     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:
     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions
The source code is available on repo.or.cz. I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug! I'm also interested in any suggestions you may have.

1 February 2015

Francois Marier: Upgrading Lenovo ThinkPad BIOS under Linux

The Lenovo support site offers downloadable BIOS updates that can be run either from Windows or from a bootable CD. Here's how to convert the bootable CD ISO images under Linux in order to update the BIOS from a USB stick.

Checking the BIOS version
Before upgrading your BIOS, you may want to look up which version of the BIOS you are currently running. To do this, install the dmidecode package:
apt-get install dmidecode
then run:
dmidecode
or alternatively, look at the following file:
cat /sys/devices/virtual/dmi/id/bios_version

Updating the BIOS using a USB stick
To update without using a bootable CD, install the genisoimage package:
apt-get install genisoimage
then use geteltorito to convert the ISO you got from Lenovo:
geteltorito -o bios.img gluj19us.iso
Insert a USB stick you're willing to erase entirely and then copy the image onto it (replacing sdX with the correct device name, not partition name, for the USB stick):
dd if=bios.img of=/dev/sdX
then restart and boot from the USB stick by pressing Enter, then F12 when you see the Lenovo logo.

26 January 2015

Francois Marier: Using unattended-upgrades on Rackspace's Debian and Ubuntu servers

I install the unattended-upgrades package on almost all of my Debian and Ubuntu servers in order to ensure that security updates are automatically applied. It works quite well except that I still need to login manually to upgrade my Rackspace servers whenever a new rackspace-monitoring-agent is released because it comes from a separate repository that's not covered by unattended-upgrades. It turns out that unattended-upgrades can be configured to automatically upgrade packages outside of the standard security repositories but it's not very well documented and the few relevant answers you can find online are still using the old whitelist syntax.

Initial setup
The first thing to do is to install the package if it's not already done:
apt-get install unattended-upgrades
and to answer yes to the automatic stable update question. If you don't see the question (because your debconf priority is set too high -- change it with dpkg-reconfigure debconf), you can always trigger the question manually:
dpkg-reconfigure -plow unattended-upgrades
Once you've got that installed, the configuration file you need to look at is /etc/apt/apt.conf.d/50unattended-upgrades.

Whitelist matching criteria
Looking at the unattended-upgrades source code, I found the list of things that can be used to match on in the whitelist:
  • origin (shortcut: o)
  • label (shortcut: l)
  • archive (shortcut: a)
  • suite (which is the same as archive)
  • component (shortcut: c)
  • site (no shortcut)
You can find the value for each of these fields in the appropriate *_Release file under /var/lib/apt/lists/. Note that the value of site is the hostname of the package repository, also present in the first part of these *_Release filenames (stable.packages.cloudmonitoring.rackspace.com in the example below). In my case, I was looking at the following inside /var/lib/apt/lists/stable.packages.cloudmonitoring.rackspace.com_debian-wheezy-x86%5f64_dists_cloudmonitoring_Release:
Origin: Rackspace
Codename: cloudmonitoring
Date: Fri, 23 Jan 2015 18:58:49 UTC
Architectures: i386 amd64
Components: main
...
which means that, in addition to site, the only things I could match on were origin and component since there are no Suite or Label fields in the Release file. This is the line I ended up adding to my /etc/apt/apt.conf.d/50unattended-upgrades:
Unattended-Upgrade::Origins-Pattern {
        // Archive or Suite based matching:
        // Note that this will silently match a different release after
        // migration to the specified archive (e.g. testing becomes the
        // new stable).
//      "o=Debian,a=stable";
//      "o=Debian,a=stable-updates";
//      "o=Debian,a=proposed-updates";
        "origin=Debian,archive=stable,label=Debian-Security";
        "origin=Debian,archive=oldstable,label=Debian-Security";
        "origin=Rackspace,component=main";
};

Testing
To ensure that the config is right and that unattended-upgrades will pick up rackspace-monitoring-agent the next time it runs, I used:
unattended-upgrade --dry-run --debug
which should output something like this:
Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ['origin=Debian,archive=stable,label=Debian-Security', 'origin=Debian,archive=oldstable,label=Debian-Security', 'origin=Rackspace,component=main']
Checking: rackspace-monitoring-agent (["<Origin component:'main' archive:'' origin:'Rackspace' label:'' site:'stable.packages.cloudmonitoring.rackspace.com' isTrusted:True>"])
pkgs that look like they should be upgraded: rackspace-monitoring-agent
...
Option --dry-run given, *not* performing real actions
Packages that are upgraded: rackspace-monitoring-agent

Making sure that automatic updates are happening
In order to make sure that all of this is working and that updates are actually happening, I always install apticron on all of the servers I maintain. It runs once a day and emails me a list of packages that need to be updated and it keeps doing that until the system is fully up-to-date. The only thing missing from this is getting a reminder whenever a package update (usually the kernel) requires a reboot to take effect. That's where the update-notifier-common package comes in. Because that package will add a hook that will create the /var/run/reboot-required file whenever a kernel update has been installed, all you need to do is create a cronjob like this in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
assuming of course that you are already receiving emails sent to the root user (if not, add the appropriate alias in /etc/aliases and run newaliases).

7 January 2015

Dirk Eddelbuettel: RcppCNPy 0.2.4

A new release of the RcppCNPy package is now on CRAN. This release mostly solidifies and fixes things. Support for saving integer objects, which was expanded in release 0.2.3, was not entirely correct. Operations on big-endian systems were not up to snuff either. Wush Wu helped in getting this right with very diligent testing and patching, particularly on big-endian hardware. We also got a pull request from Romain to reflect better const correctness at the Rcpp side of things. Last but not least, we were obliged by the CRAN maintainers to not assume one can call gzip via a system() call because, well, you guessed it.
Changes in version 0.2.4 (2015-01-05)
  • Support for saving integer objects was not correct and has been fixed.
  • Support for loading and saving on 'big endian' systems was incomplete, has been greatly expanded and corrected, thanks in large part to very diligent testing as well as patching by Wush Wu.
  • The implementation now uses const iterators, thanks to a pull request by Romain Francois.
  • The vignette no longer assumes that one can call gzip via system as the world's leading consumer OS may disagree.
CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the rcpp-devel mailing list off the R-Forge page for Rcpp may be the best place to start a discussion. GitHub issue tickets are also welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 December 2014

Francois Marier: Making Firefox Hello work with NoScript and RequestPolicy

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology). If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check
There are a few preferences to check in about:config:
  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false

NoScript
If you use the popular NoScript add-on, you will need to whitelist the following hosts:
  • about:loopconversation
  • hello.firefox.com
  • loop.services.mozilla.com
  • opentok.com
  • tokbox.com

RequestPolicy
If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following destination host:
  • tokbox.com
as well as the following origin to destination mappings:
  • about:loopconversation -> firefox.com
  • about:loopconversation -> mozilla.com
  • about:loopconversation -> opentok.com
  • firefox.com -> mozilla.com
  • firefox.com -> mozilla.org
  • firefox.com -> opentok.com
  • mozilla.org -> firefox.com
I have unfortunately not been able to find a way to restrict tokbox.com to a set of (source, destination) pairs. I suspect that the use of websockets confuses RequestPolicy. If you find a more restrictive policy that works, please leave a comment!

26 November 2014

Francois Marier: Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected. However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around. Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup
The first step is to install znc:
apt-get install znc
Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support. Then, as a non-root user, generate a self-signed TLS certificate for it:
openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365
and make sure you use something like irc.example.com as the subject name: that's the hostname you will be connecting to from your IRC client. Then install the certificate in the right place:
mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem
Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:
  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins
Finally, open the IRC port (tcp port 6697 by default) in your firewall:
iptables -A INPUT -p tcp --dport 6697 -j ACCEPT
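To make that rule survive reboots, one common approach (not covered in the post) is the iptables-persistent package:
apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4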

Client setup (irssi)
On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse. Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):
servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);
Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt. Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts
So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:
#!/bin/bash
ssh irc.example.com "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi
Now, instead of typing irssi to start my IRC client, I use irc. If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

20 October 2014

Francois Marier: LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine. Start by installing (as root) the necessary packages:
apt-get install lxc libvirt-bin debootstrap

Network setup
I decided to use the default /etc/lxc/default.conf configuration (no change needed here):
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24
but I had to make sure that the "guests" could connect to the outside world through the "host":
  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    
  2. and then applying it using:
    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:
    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:
    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:
    iptables-apply
    

Creating a container
Creating a new container (in /var/lib/lxc/) is simple:
sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64
You can start or stop it like this:
sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh
The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:
sudo lxc-stop -n sid64
sudo lxc-start -n sid64
then install a text editor inside the container because the root image doesn't have one by default:
apt-get install vim
then paste your public key in /root/.ssh/authorized_keys. Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:
sudo lxc-ls --fancy

Fixing Perl locale errors
If you see a bunch of errors like these when you start your container:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
then log into the container as root and use:
dpkg-reconfigure locales
to enable the same locales as the ones you have configured in the host.

30 September 2014

Francois Marier: Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against. I decided to use schleuder. Here's how I set it up.

Requirements
What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber. What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised since every list email transits through there at some point in plain text.

Installing the schleuder package
The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby). If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error. Then, simply install this package:
apt-get install schleuder

Postfix configuration
The next step is to configure your mail server (I use postfix) to handle the schleuder lists. This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:
inet_interfaces = all
Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:
local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list
Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions. After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports. Then you can test it by sending an email to LISTNAME-sendkey@example.com. You should receive the list's public key.

Adding list members
Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:
  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

30 August 2014

Francois Marier: Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end. Here's an example from the Node.js back-end of a real application:
$ npm list | wc -l
256
What if one of these 256 external components has a security vulnerability? How would you know and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable. However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this lesson has not propagated to the web, where the standard approach seems to be to "statically link everything". What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project
As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description
Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site. From a developer point of view, it's a fairly simple stack. The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites. Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS. For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:
http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070
whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:
http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9
due to the presence of an SRV record on fmarier.org.
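For instance, the first hash above can be reproduced, and the delegating SRV record inspected, from the command line (the _avatars._tcp record name is the one defined by the Libravatar protocol for HTTP lookups):
# MD5 of the trimmed, lowercased email address
echo -n "francois@debian.org" | md5sum
# SRV record that delegates avatar lookups for the domain
dig +short srv _avatars._tcp.fmarier.org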

Ground rules
The main rules that the project follows are to:
  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages
In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:
  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.

Results
Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run. The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time. There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet. Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems
The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly. Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian. One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates. On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:
  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.
Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?
It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar. Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden. The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly. While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies. This blog post is based on a talk I gave at DebConf 14: slides, video.

21 July 2014

Francois Marier: Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around, and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian. After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script
As soon as I log into my desktop, my startup script starts a few programs. Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:
dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver
In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:
xautolock -time 30 -locker "gnome-screensaver-command --lock" &
to lock the screen using gnome-screensaver after 30 minutes of inactivity. I can also trigger it manually using the following shortcut defined in my ~/.i3/config:
bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts
While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:
# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'
# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle
# show battery stats
bindsym XF86Battery exec gnome-power-statistics
to make volume control, screen brightness and battery status buttons work as expected on my laptop. These bindings require the following packages: pulseaudio-utils (for pactl), xbacklight, and gnome-power-manager (for gnome-power-statistics).

Keyboard layout switcher
Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard. To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:
bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
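The toggle script itself isn't included in the post; a minimal sketch of one, assuming you switch between a us and a fr layout, could look like this:
#!/bin/sh
# hypothetical layout toggle using setxkbmap
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap fr
else
    setxkbmap us
fi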

Suspend script
Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:
HandleLidSwitch=lock
Instead, when I want to suspend to ram, I use the following keyboard shortcut:
bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram
which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep. To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:
francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks
While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications
A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:
for_window [class="VidyoDesktop"] floating enable
You can get the Xorg class of the offending application by running this command:
xprop | grep WM_CLASS
before clicking on the window.

Keeping IM windows on the first workspace
I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:
assign [class="Pidgin"] 1

Automatically moving workspaces when docking
Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:
# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1
You can get these output names by running:
xrandr --display :0 | grep " connected"
Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:
bindsym XF86Display exec /home/francois/bin/external-monitor
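That script isn't included either; a rough sketch, assuming the eDP1/DP2 output names used above, might be:
#!/bin/sh
# hypothetical display-setup script: prefer the external monitor when present
if xrandr | grep -q "^DP2 connected"; then
    xrandr --output DP2 --auto --primary --output eDP1 --auto
else
    xrandr --output eDP1 --auto --primary --output DP2 --off
fi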

7 May 2014

Mario Lang: Planet bug: empty alt tags for hackergotchis

There is a strange bug in Planet Debian I am seeing since I joined. It is rather minor, but since it is an accessibility bug, I'd like to mention it here. I have written to the Planet Debian maintainers, and was told to figure it out myself. This is a pattern, accessibility is considered wishlist, apparently. And the affected people are supposed to fix it on their own. It is better if I don't say anything more about that attitude.
The Bug
On Planet Debian, only some people have an alt tag for their hackergotchi, while all the configured entries look similar. There is no obvious difference in the configuration, but still, only some users here have a proper alt tag for their hackergotchi. Here is a list:
  • Dirk Eddelbuettel
  • Steve Kemp
  • Wouter Verhelst
  • Mehdi (noreply@blogger.com)
  • Andrew Pollock
  • DebConf Organizers
  • Francois Marier
  • The MirOS Project (tg@mirbsd.org)
  • Paul Tagliamonte
  • Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)
  • Joey Hess
  • Chris Lamb
  • Mirco Bauer
  • Christine Spang
  • Guido Günther
These people/organisations currently displayed on Planet Debian have a proper alt tag for their hackergotchi. All the other members have none. In Lynx, it looks like the following:
hackergotchi for
And for those where it works, it looks like:
hackergotchi for Dirk Eddelbuettel
Strange, isn't it? If you have any idea why this might be happening, let me know, or even better, tell Planet Debian maintainers how to fix it. P.S.: Package planet-venus says it is a rewrite of Planet, and Planet can be found in Debian as well. I don't see it in unstable, maybe I am blind? Or has it been removed recently? If so, the package description of planet-venus is wrong.

4 May 2014

Francois Marier: What's in a debian/ directory?

If you're looking to get started at packaging free software for Debian, you should start with the excellent New Maintainers' Guide or the Introduction to Debian Packaging on the Debian wiki. Once you know the basics, or if you prefer to learn by example, you may be interested in the full walkthrough which follows. We will look at the contents of three simple packages.

node-libravatar
This package is a node.js library for the Libravatar service. Version 2.0.0-3 of that package contains the following files in its debian/ directory:
  • changelog
  • compat
  • control
  • copyright
  • docs
  • node-libravatar.install
  • rules
  • source/format
  • watch

debian/control
Source: node-libravatar
Priority: extra
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4
Section: web
Homepage: https://github.com/fmarier/node-libravatar
Vcs-Git: git://git.debian.org/collab-maint/node-libravatar.git
Vcs-Browser: http://git.debian.org/?p=collab-maint/node-libravatar.git;a=summary
Package: node-libravatar
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, nodejs
Description: libravatar library for NodeJS
 This library allows web application authors to make use of the free Libravatar
 service (https://www.libravatar.org). This service hosts avatar images for
 users and allows other sites to look them up using email addresses.
 .
 node-libravatar includes full support for federated avatar servers.
This is probably the most important file since it contains the bulk of the metadata about this package. Maintainer is a required field listing the maintainer of that package, which can be a person or a team. It only contains a single value though, any co-maintainers will be listed under the optional Uploaders field. Build-Depends lists the packages which are needed to build the package (e.g. a compiler), as opposed to those which are needed to install the binary package (e.g. a library it uses). Standards-Version refers to the version of the Debian Policy that this package complies with. The Homepage field refers to the upstream homepage, whereas the Vcs-* fields point to the repository where the packaging is stored. If you take a look at the node-libravatar packaging repository you will see that it contains three branches:
  • upstream is the source as it was in the tarball downloaded from upstream.
  • master is the upstream branch along with all of the Debian customizations.
  • pristine-tar is unrelated to the other two branches and is used by the pristine-tar tool to reconstitute the original upstream tarball as needed.
After these fields comes a new section which starts with a Package field. This is the definition of a binary package, not to be confused with the Source field at the top of this file, which refers to the name of the source package. In this particular example, they are both the same and there is only one of each, however this is not always the case, as we'll see later. Inside that binary package definition, lives the Architecture field which is normally one of these two:
  • all for a binary package that will work on all architectures but only needs to be built once
  • any for a binary package that will work everywhere but that will need to be built separately for each architecture
Finally, the last field worth pointing out is the Depends field which lists all of the runtime dependencies that the binary package has. This is what will be pulled in by apt-get when you apt-get install node-libravatar. The two variables will be substituted later by debhelper.
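Once the package has been built (as shown later in the debian/rules section), you can check what those variables were substituted with by querying the .deb directly; this is standard dpkg-deb usage:
dpkg-deb -f ../node-libravatar_2.0.0-3_all.deb Depends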

debian/changelog
node-libravatar (2.0.0-3) unstable; urgency=low
  * debian/watch: poll github directly
  * Bump Standards-Version up to 3.9.4
 -- Francois Marier <francois@debian.org>  Mon, 20 May 2013 12:07:49 +1200
node-libravatar (2.0.0-2) unstable; urgency=low
  * More precise license tag and upstream contact in debian/copyright
 -- Francois Marier <francois@debian.org>  Tue, 29 May 2012 22:51:03 +1200
node-libravatar (2.0.0-1) unstable; urgency=low
  * New upstream release
    - new non-backward-compatible API
 -- Francois Marier <francois@debian.org>  Mon, 07 May 2012 14:54:19 +1200
node-libravatar (1.1.1-1) unstable; urgency=low
  * Initial release (Closes: #661771)
 -- Francois Marier <francois@debian.org>  Fri, 02 Mar 2012 15:29:57 +1300
This may seem at first like a mundane file, but it is very important since it is the canonical source of the package version (2.0.0-3 in this case). This is the only place where you need to bump the package version when uploading a new package to the Debian archive. The first line also includes the distribution where the package will be uploaded. It is usually one of these values:
  • unstable for the vast majority of uploads
  • stable for uploads that have been approved by the release maintainers and fix serious bugs in the stable version of Debian
  • stable-security for security fixes to the stable version of Debian that cannot wait until the next stable point release and have been approved by the security team
Packages uploaded to unstable will migrate automatically to testing provided that a few conditions are met (e.g. no release-critical bugs were introduced). The length of time before that migration is influenced by the urgency field (low, medium or high) in the changelog entry. Another thing worth noting is that the first upload normally needs to close an ITP (Intent to Package) bug.
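In practice, changelog entries are rarely edited by hand; the dch tool from the devscripts package takes care of the formatting, for example (hypothetical entry text):
dch -i "Bump Standards-Version up to 3.9.4"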

debian/rules
#!/usr/bin/make -f
# -*- makefile -*-
%:
    dh $@ 
override_dh_auto_test:
As can be gathered from the first two lines of this file, this is a Makefile. This is what controls how the package is built. There's not much to see and that's because most of its content is automatically added by debhelper. So let's look at it in action by building the package:
$ git buildpackage -us -uc
and then looking at parts of the build log (../node-libravatar_2.0.0-3_amd64.build):
 fakeroot debian/rules clean
dh clean 
   dh_testdir
   dh_auto_clean
   dh_clean
One of the first things we see is the debian/rules file being run with the clean target. To find out what that does, have a look at the dh_auto_clean manual page, which states that it will attempt to delete build residues and run something like make clean using the upstream Makefile.
 debian/rules build
dh build 
   dh_testdir
   dh_auto_configure
   dh_auto_build
Next we see the build target being invoked and looking at dh_auto_configure we see that this will essentially run ./configure and its equivalents. The dh_auto_build helper script then takes care of running make (or equivalent) on the upstream code. This should be familiar to anybody who has ever built a piece of free software from scratch and has encountered the usual method for building from source:
./configure
make
make install
Finally, we get to actually build the .deb:
 fakeroot debian/rules binary
dh binary 
   dh_testroot
   dh_prep
   dh_installdirs
   dh_auto_install
   dh_install
...
   dh_md5sums
   dh_builddeb
dpkg-deb: building package `node-libravatar' in `../node-libravatar_2.0.0-3_all.deb'.
Here we see a number of helpers, including dh_auto_install which takes care of running make install. Going back to the debian/rules, we notice that there is a manually defined target at the bottom of the file:
override_dh_auto_test:
which essentially disables dh_auto_test by replacing it with an empty set of commands. The reason for this becomes clear when we take a look at the test target of the upstream Makefile and the dependencies it has: tap, a node.js library that is not yet available in Debian. In other words, we can't run the test suite on the build machines so we need to disable it here.

debian/compat
9
This file simply specifies the version of debhelper that is required by the various helpers used in debian/rules. Version 9 is the latest at the moment.

debian/copyright
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: node-libravatar
Upstream-Contact: Francois Marier <francois@libravatar.org>
Source: https://github.com/fmarier/node-libravatar
Files: *
Copyright: 2011 Francois Marier <francois@libravatar.org>
License: Expat
Files: debian/*
Copyright: 2012 Francois Marier <francois@debian.org>
License: Expat
License: Expat
 Permission is hereby granted, free of charge, to any person obtaining a copy of this
 software and associated documentation files (the "Software"), to deal in the Software
 without restriction, including without limitation the rights to use, copy, modify,
 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to the following
 conditions:
 .
 The above copyright notice and this permission notice shall be included in all copies
 or substantial portions of the Software.
 .
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
 CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
 OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This machine-readable file lists all of the different licenses encountered in this package. It requires that the maintainer audits the upstream code for any copyright statements that might be present in addition to the license of the package as a whole.

debian/docs
README.md
This file contains a list of upstream files that will be copied into the /usr/share/doc/node-libravatar/ directory by dh_installdocs.

debian/node-libravatar.install
lib/*    usr/lib/nodejs/
The install file is used by dh_install to supplement the work done by dh_auto_install which, as we have seen earlier, essentially just runs make install on the upstream Makefile. Looking at that upstream Makefile, it becomes clear that the files will need to be installed manually by the Debian package since that Makefile doesn't have an install target.

debian/watch
version=3
https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
This is the file that allows Debian tools like the Package Tracking System to automatically detect that a new upstream version is available. What it does is simply visit the upstream page which contains all of the release tarballs and look for links which have an href matching the above regular expression. Running uscan --report --verbose will show us all of the tarballs that can be automatically discovered using this watch file:
-- Scanning for watchfiles in .
-- Found watchfile in ./debian
-- In debian/watch, processing watchfile line:
   https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
-- Found the following matching hrefs:
     /fmarier/node-libravatar/archive/node-libravatar-2.0.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.0.tar.gz
Newest version on remote site is 2.0.0, local version is 2.0.0
 => Package is up to date
-- Scan finished

pylibravatar
This second package is the equivalent Python library for the Libravatar service. Version 1.6-2 of that package contains similar files in its debian/ directory, but let's look at two in particular:
  • control
  • upstream/signing-key.asc

debian/control
Source: pylibravatar
Section: python
Priority: optional
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9), python-all, python3-all
Standards-Version: 3.9.5
Homepage: https://launchpad.net/pyLibravatar
...
Package: python-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-dns, python
Description: Libravatar module for Python 2
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...
Package: python3-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-dns, python3
Description: Libravatar module for Python 3
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...
Here is an example of a source package (pylibravatar) which builds two separate binary packages: python-libravatar and python3-libravatar. This highlights the fact that a given upstream source can be split into several binary packages in the archive when it makes sense. In this case, there is no point in Python 2 applications pulling in the Python 3 files, so the two separate packages make sense. Another common example is the use of a -doc package to separate the documentation from the rest of a package so that it doesn't need to be installed on production servers.

debian/upstream/signing-key.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBEpQYz4BEAC7REQD1za69RUnkt6nRCFhSJmmoeJc+yEiWTKc9GOIMAwJDme1
+CMYgVn4Xzf1VQYwD/lE+mfWgyeMomLQjDM1mxx/LOM2a1WWPOk9+PvQwKfRJy92
...
UxDtZm/4yUmU6KvHvOGiDCMuIiB+MqhqJJ5wf80wXhzu8nmC+fyGt6nvu0ggMle8
sAMgXt/aQUTZE5zNCQ==
=RkTO
-----END PGP PUBLIC KEY BLOCK-----
This is simply the OpenPGP key that the upstream developer uses to sign release tarballs. Since PGP signatures are available on the upstream download page, it's possible to instruct uscan to check signatures before downloading tarballs. The way to do that is to use the pgpsigurlmangle option in debian/watch:
version=3
opts=pgpsigurlmangle=s/$/.asc/ https://pypi.python.org/pypi/pyLibravatar https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-(.*)\.tar\.gz
which is simply a regular expression replacement that takes the tarball URL and converts it to the URL of the matching PGP signature.
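Since the mangle rule is an ordinary sed substitution applied to each tarball URL, its effect can be previewed in a shell, here using the 1.6 release mentioned earlier:
echo "https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-1.6.tar.gz" | sed 's/$/.asc/'
which prints the same URL with .asc appended, i.e. the location of the detached signature.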

fcheck
The last package we will look at is a file integrity checker. It essentially goes through all of the files in /usr/bin/ and /usr/lib/ and stores a hash of them in its database. When one of these files changes, you get an email. In particular, we will look at the following files in the debian/ directory of version 2.7.59-18:
  • dirs
  • fcheck.cron.d
  • fcheck.postrm
  • fcheck.postinst
  • patches/
  • README.Debian
  • rules
  • source/format

debian/patches
This directory contains ten patches as well as a file called series which lists the patches that should be applied to the upstream source and in which order. Should you need to temporarily disable a patch, simply remove it from this file and it will no longer be applied at build time. Let's have a look at patches/04_cfg_sha256.patch:
Description: Switch to sha256 hash algorithm
Forwarded: not needed
Author: Francois Marier <francois@debian.org>
Last-Update: 2009-03-15
--- a/fcheck.cfg
+++ b/fcheck.cfg
@@ -149,8 +149,7 @@ TimeZone        = EST5EDT
 #$Signature      = /usr/bin/sum
 #$Signature      = /usr/bin/cksum
 #$Signature      = /usr/bin/md5sum
-$Signature      = /bin/cksum
-
+$Signature      = /usr/bin/sha256sum
 # Include an optional configuration file.
This is a very simple patch which changes the default configuration of fcheck to promote the use of a stronger hash function. At the top of the file is a bunch of metadata in the DEP-3 format. Why does this package contain so many customizations to the upstream code when Debian's policy is to push fixes upstream and work towards reducing the delta between upstream and Debian's code? The answer can be found in debian/control:
Homepage: http://web.archive.org/web/20050415074059/www.geocities.com/fcheck2000/
This package no longer has an upstream maintainer and its original source is gone. In other words, the Debian package is where all of the new bug fixes get done.

debian/source/format
3.0 (quilt)
This file contains what is called the source package format. What it basically says is that the patches found in debian/patches/ will be applied to the upstream source using the quilt tool at build time.
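For reference, adding a new patch to such a package is usually done with quilt itself; a minimal sketch, with an illustrative patch name:
export QUILT_PATCHES=debian/patches
quilt new 11_example_fix.patch
quilt add fcheck.cfg
# edit fcheck.cfg, then record the change into the new patch
quilt refresh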

debian/fcheck.postrm
#!/bin/sh
# postrm script for fcheck
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#        * <postrm> `remove'
#        * <postrm> `purge'
#        * <old-postrm> `upgrade' <new-version>
#        * <new-postrm> `failed-upgrade' <old-version>
#        * <new-postrm> `abort-install'
#        * <new-postrm> `abort-install' <old-version>
#        * <new-postrm> `abort-upgrade' <old-version>
#        * <disappearer's-postrm> `disappear' <overwriter>
#          <overwriter-version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
    ;;
    purge)
      if [ -e /var/lib/fcheck/fcheck.dbf ]; then
        echo "Purging old database file ..."
        rm -f /var/lib/fcheck/fcheck.dbf
      fi
      rm -rf /var/lib/fcheck
      rm -rf /var/log/fcheck
      rm -rf /etc/fcheck
    ;;
    *)
        echo "postrm called with unknown argument \ $1'" >&2
        exit 1
    ;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0
This script is one of the many possible maintainer scripts that a package can provide if needed. This particular one, as the name suggests, will be run after the package is removed (apt-get remove fcheck) or purged (apt-get remove --purge fcheck). Looking at the case statement above, it doesn't do anything extra in the remove case, but it deletes a few files and directories when called with the purge argument.
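The difference between the two branches is easy to trigger from the command line; only the second form runs the purge branch above:
apt-get remove fcheck          # calls 'postrm remove': database and config files are kept
apt-get remove --purge fcheck  # calls 'postrm purge': /var/lib/fcheck and friends are deleted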

debian/README.Debian
This optional README file contains Debian-specific instructions that might be useful to users. It supplements the upstream README, which is often more generic and cannot assume a particular system configuration.

debian/rules
#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
build-arch:
build-indep:
build: build-stamp
build-stamp:
    dh_testdir
    pod2man --section=8 $(CURDIR)/debian/fcheck.pod > $(CURDIR)/fcheck.8
    touch build-stamp
clean:
    dh_testdir
    dh_testroot
    rm -f build-stamp 
    rm -f $(CURDIR)/fcheck.8
    dh_clean
install: build
    dh_testdir
    dh_testroot
    dh_prep
    dh_installdirs
    cp $(CURDIR)/fcheck $(CURDIR)/debian/fcheck/usr/sbin/fcheck
    cp $(CURDIR)/fcheck.cfg $(CURDIR)/debian/fcheck/etc/fcheck/fcheck.cfg
# Build architecture-dependent files here.
binary-arch: build install
# Build architecture-independent files here.
binary-indep: build install
    dh_testdir
    dh_testroot
    dh_installdocs
    dh_installcron
    dh_installman fcheck.8
    dh_installchangelogs
    dh_installexamples
    dh_installlogcheck
    dh_link
    dh_strip
    dh_compress
    dh_fixperms
    dh_installdeb
    dh_shlibdeps
    dh_gencontrol
    dh_md5sums
    dh_builddeb
binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install
This is an example of an old-style debian/rules file, which you still encounter in packages that haven't yet been upgraded to the latest version of debhelper (9), as shown by the contents of debian/compat:
8
It does essentially the same thing as what we've seen in the build log, but in a more verbose way.

debian/dirs
usr/sbin
etc/fcheck
This file contains a list of directories that dh_installdirs will create in the build directory. They need to exist ahead of time because files are copied into them in the install target of the debian/rules file. Note that this is different from directories created at the time of installation of the package: those (e.g. /var/log/fcheck/) must be created in the postinst script and removed in the postrm script.
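A minimal sketch of that runtime pattern (illustrative, not fcheck's actual maintainer scripts) would be a postinst like this:
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
    # created at installation time, not at build time
    mkdir -p /var/log/fcheck
fi
#DEBHELPER#
exit 0
with a matching rm -rf /var/log/fcheck in the purge branch of the postrm, as seen earlier.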

debian/fcheck.cron.d
#
# Regular cron job for the fcheck package
#
30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3 /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1; then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f /var/run/fcheck.out
This file is the cronjob which drives the checks performed by this package. It will be copied to /etc/cron.d/fcheck by dh_installcron.

3 May 2014

Dirk Eddelbuettel: RcppSMC 0.1.3 and 0.1.4

The very useful Valgrind tool had flagged an actual error in the package which the CRAN maintainers asked us to address. This was followed by a minor brown-bag oversight of a missing delete, also tagged by Valgrind. Both are pretty ancient bugs which we probably should have found aeons ago. Releases 0.1.3 and 0.1.4 made it to CRAN yesterday in short succession. To recap, RcppSMC combines the SMCTC template classes for Sequential Monte Carlo and Particle Filters (Johansen, 2009, JSS) with the Rcpp package for R/C++ Integration (Eddelbuettel and Francois, 2011, JSS) and thereby allows for easier and more direct access from R to the computational core of the Sequential Monte Carlo algorithm. The two NEWS entries are below:
Changes in RcppSMC version 0.1.4 (2014-05-02)
  • Added missing delete operator to destructor in sampler
Changes in RcppSMC version 0.1.3 (2014-05-01)
  • Bugfix in Sampler.iterate() for memory overrun detected by valgrind
Courtesy of CRANberries, there are also diffstat reports for 0.1.3 and 0.1.4. As always, more detailed information is on the RcppSMC page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 March 2014

Francois Marier: Using vnc to do remote tech support over high-latency networks

If you ever find yourself doing a bit of technical support for relatives over the phone, there's nothing like actually seeing what they are doing on their computer. One of the best tools for such remote desktop sharing is vnc. Here's the best setup I have come up with so far. If you have any suggestions, please leave a comment!

Basic vnc configuration
First off, you need two things: a vnc server on your relative's machine and a vnc client on yours. Thanks to vnc being an open protocol, there are many choices for both. I eventually settled on x11vnc for the server and ssvnc for the client. They are both available in the standard Debian and Ubuntu repositories. Since I have ssh access to the machine that needs to run the server, I simply log in and then run x11vnc. Here's what ~/.x11vncrc contains:
noxdamage
That option appears to be necessary when the desktop to share is running gnome-shell / compiz. Afterwards, I start the client on my laptop with the following command:
ssvncviewer -encodings zrle -scale 1280x775 localhost
The scaling factor is simply the resolution of the client minus any window decorations.

ssh configuration
As you can see above, the client is not connecting directly to the server. Instead it's connecting to its own vnc port (localhost:5900). That's because I'm tunnelling the traffic through the ssh connection in order to avoid relying on vnc extensions for authentication and encryption. Here's what the client's ~/.ssh/config needs for that simple use case:
Host server.example.com
  LocalForward 5900 127.0.0.1:5900
If the remote host (which has an internal IP address of 192.168.1.2 in this example) is not connected directly to the outside world and instead goes through a gateway, then your ~/.ssh/config will look like this:
Host gateway.example.com
  ForwardAgent yes
  LocalForward 5900 192.168.1.2:5900
Host server.example.com
  ProxyCommand ssh -q -a gateway.example.com nc -q0 %h 22
and the remote host will need to open up a port on its firewall for the gateway (internal IP address of 192.168.1.1 here):
iptables -A INPUT -p tcp --dport 5900 -s 192.168.1.1/32 -j ACCEPT

Optimizing for high-latency networks
Since I do most of my tech support over a very high latency network, I tweaked the default vnc settings to reduce the amount of network traffic. I added this to ~/.x11vncrc on the vnc server:
ncache 10
ncache_cr
and changed the client command line to this:
ssvncviewer -compresslevel 9 -quality 3 -bgr233 -encodings zrle -use64 -scale 1280x775 -ycrop 1024 localhost
This decreases image quality (and required bandwidth) and enables client-side caching. The magic 1024 number is simply the full vertical resolution of the remote machine, which sports a vintage 1280x1024 LCD monitor.

20 February 2014

Francois Marier: Hardening ssh Servers

Basic configuration
There are a few basic things that most admins will already know (and that tiger will warn you about if you forget):
  • only allow version 2 of the protocol
  • disable root logins
  • disable password authentication
This is what /etc/ssh/sshd_config should contain:
Protocol 2
PasswordAuthentication no
PermitRootLogin no

Whitelist approach to giving users ssh access
To ensure that only a few users have ssh access to the server and that newly created users don't have it enabled by default, create a new group:
addgroup sshuser
and then add the relevant users to it:
adduser francois sshuser
Finally, add this to /etc/ssh/sshd_config:
AllowGroups sshuser

Deterring brute-force (or dictionary) attacks
One way to ban attackers who try to brute-force your ssh server is to install the fail2ban package. It keeps an eye on the ssh log file (/var/log/auth.log) and temporarily blocks IP addresses after a number of failed login attempts. Another approach is to hide the ssh service using Single Packet Authorization. I have fwknop installed on some of my servers and use small wrapper scripts to connect to them.
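Such a wrapper only needs to send the authorization packet before starting ssh; a minimal sketch, assuming sshd listens on port 22 (the host name is illustrative):
#!/bin/sh
# send the single authorization packet, give the firewall a moment, then connect
fwknop -A tcp/22 -R -D server.example.com
sleep 1
ssh server.example.com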

Using restricted shells
For those users who only need an ssh account on the server in order to transfer files (using scp or rsync), it's a good idea to set their shell (via chsh) to a restricted one like rssh. Should they attempt to log into the server, these users will be greeted with the following error message:
This account is restricted by rssh.
Allowed commands: rsync 
If you believe this is in error, please contact your system administrator.
Connection to server.example.com closed.
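Setting this up amounts to changing the user's shell and enabling the permitted protocols in /etc/rssh.conf; for an scp/rsync-only account, something like:
chsh -s /usr/bin/rssh francois
with /etc/rssh.conf containing:
allowscp
allowrsync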

Restricting authorized keys to certain IP addresses
In addition to listing all of the public keys that are allowed to log into a user account, the ~/.ssh/authorized_keys file also allows (as the man page points out) a user to impose a number of restrictions. Perhaps the most useful option is "from", which allows a user to restrict the IP addresses that can log in using a specific key. Here's what one of my authorized_keys entries looks like:
from="192.0.2.2" ssh-rsa AAAAB3Nz...zvCn bot@example
You may also want to add the following options to each entry: no-X11-forwarding, no-user-rc, no-pty, no-agent-forwarding and no-port-forwarding.
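Combining these options with the IP restriction gives a single authorized_keys entry along these lines:
from="192.0.2.2",no-X11-forwarding,no-user-rc,no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAAB3Nz...zvCn bot@example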

Increasing the amount of logging
The first thing I'd recommend is to increase the level of verbosity in /etc/ssh/sshd_config:
LogLevel VERBOSE
which will, amongst other things, log the fingerprints of keys used to login:
sshd: Connection from 192.0.2.2 port 39671
sshd: Found matching RSA key: de:ad:be:ef:ca:fe
sshd: Postponed publickey for francois from 192.0.2.2 port 39671 ssh2 [preauth]
sshd: Accepted publickey for francois from 192.0.2.2 port 39671 ssh2 
Secondly, if you run logcheck and would like to whitelist the "Accepted publickey" messages on your server, you'll have to start by deleting the first line of /etc/logcheck/ignore.d.server/sshd. Then you can add an entry for all of the usernames and IP addresses that you expect to see. Finally, it is also possible to log all commands issued by a specific user over ssh by enabling the pam_tty_audit module in /etc/pam.d/sshd:
session required pam_tty_audit.so enable=francois
However, this module is not included in wheezy and has only recently been re-added to Debian.
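Once the module is enabled, the recorded TTY activity lands in the audit log and can be summarized with the aureport tool from the auditd package:
aureport --tty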

Identifying stolen keys
One thing I'd love to have is a way to identify a stolen public key. Given the IP restrictions described above, if a public key is stolen and used from a different IP, I will see something like this in /var/log/auth.log:
sshd: Connection from 198.51.100.10 port 39492
sshd: Authentication tried for francois with correct key but not from a permitted host (host=198.51.100.10, ip=198.51.100.10).
sshd: Failed publickey for francois from 198.51.100.10 port 39492 ssh2
sshd: Connection closed by 198.51.100.10 [preauth]
So I can get the IP address of the attacker (likely to be a random VPS or a Tor exit node), but unfortunately, the key fingerprints don't appear for failed connections like they do for successful ones. So I don't know which key to revoke. Is there any way to identify which key was used in a failed login attempt or is the solution to only ever have a single public key in each authorized_keys file and create a separate user account for each user?

11 February 2014

Dirk Eddelbuettel: RcppSMC 0.1.2

Late last week, and just before leaving to participate in this crazy thing, I managed to get a new version of RcppSMC onto CRAN. RcppSMC combines the SMCTC template classes for Sequential Monte Carlo and Particle Filters (Johansen, 2009, JSS) with the Rcpp package for R/C++ Integration (Eddelbuettel and Francois, 2011, JSS) and thereby allows for easier and more direct access from R to the computational core of the Sequential Monte Carlo algorithm. This release regroups a few minor changes we accumulated over the last few months, but was triggered by the Rcpp 0.11.0 release last week. The NEWS entry is below:
Changes in RcppSMC version 0.1.2 (2014-02-06)
  • Updated for Rcpp 0.11.0 with explicit importFrom in NAMESPACE and corresponding versioned Imports: in DESCRIPTION; also removed linking instruction from src/Makevars as it is no longer needed with this new Rcpp version
  • Added GitHub / Travis CI support
  • Use more portable dev.new() rather than x11() in pfLinearBS.R
  • Applied some corrections to pfNonlinBS.R example
  • Converted NEWS to NEWS.Rd
Courtesy of CRANberries, there is also a diffstat report relative to the previous release. As always, more detailed information is on the RcppSMC page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

2 January 2014

Francois Marier: Running your own XMPP server on Debian or Ubuntu

In order to get closer to my goal of reducing my dependence on centralized services, I decided to set up my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website, and here's how I put everything together.

DNS and SSL
My personal domain is fmarier.org and so I created the following DNS records:
jabber-gw            CNAME    fmarier.org.
_xmpp-client._tcp    SRV      5 0 5222 jabber-gw.fmarier.org.
_xmpp-server._tcp    SRV      5 0 5269 jabber-gw.fmarier.org.
Then I went to get a free XMPP SSL certificate for jabber-gw.fmarier.org from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:
openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"
I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:
cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem
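A quick way to confirm that the combined file starts with the right certificate is to ask openssl for its subject and expiry date:
openssl x509 -in ejabberd.pem -noout -subject -enddate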

ejabberd installation
Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki, with an additional customization to solve the Pidgin "Not authorized" connection problems.
  1. Install the package, using "admin" as the username for the administrative user:
    apt-get install ejabberd
    
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):
     {acl, admin, {user, "admin", "fmarier.org"}}.
     {hosts, ["fmarier.org"]}.
     {fqdn, "jabber-gw.fmarier.org"}.
    
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:
    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
    
  4. Restart the ejabberd daemon:
    /etc/init.d/ejabberd restart
    
  5. Create a new user account for yourself:
    ejabberdctl register me fmarier.org P@ssw0rd1!
    
  6. Open up the following ports on the server's firewall:
    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
    

Client setup
On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:
  • Protocol: XMPP
  • Username: me
  • Domain: fmarier.org
  • Password: P@ssw0rd1!
and the following setting in the "Advanced" tab:
  • Connection security: Require encryption
From this, I was able to connect to the server without clicking through any certificate warnings. If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message. In this example, the XMPP address I give to my friends is me@fmarier.org.
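To double-check that other servers can discover yours, the SRV records can be queried from outside:
dig +short SRV _xmpp-client._tcp.fmarier.org
dig +short SRV _xmpp-server._tcp.fmarier.org
Both should point at jabber-gw.fmarier.org on the ports listed earlier.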

22 December 2013

Francois Marier: Creating a Linode-based VPN setup using OpenVPN on Debian or Ubuntu

Using a Virtual Private Network is a good way to work around geoIP restrictions but also to protect your network traffic when travelling with your laptop and connecting to untrusted networks. While you might want to use Tor for the part of your network activity where you prefer to be anonymous, a VPN is a faster way to connect to sites that already know you. Here are my instructions for setting up OpenVPN on Debian / Ubuntu machines where the VPN server is located on a cheap Linode virtual private server. They are largely based on the instructions found on the Debian wiki. An easier way to set up an ad-hoc VPN is to use sshuttle, but for some reason it doesn't seem to work on Linode or Rackspace virtual servers.

Generating the keys
Make sure you run the following on a machine with good entropy and not a VM! I personally use a machine fitted with an Entropy Key. The first step is to install the required package:
sudo apt-get install openvpn
Then, copy the easy-rsa example files into your home directory (no need to run any of this as root):
mkdir easy-rsa
cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ easy-rsa/
cd easy-rsa/2.0
and put something like this in your ~/easy-rsa/2.0/vars:
export KEY_SIZE=2048
export KEY_COUNTRY="NZ"
export KEY_PROVINCE="AKL"
export KEY_CITY="Auckland"
export KEY_ORG="fmarier.org"
export KEY_EMAIL="francois@fmarier.org"
export KEY_CN=hafnarfjordur.fmarier.org
export KEY_NAME=hafnarfjordur.fmarier.org
export KEY_OU=VPN
Create this symbolic link:
ln -s openssl-1.0.0.cnf openssl.cnf
and generate the keys:
. ./vars
./clean-all
./build-ca
./build-key-server server  # press ENTER at every prompt, no password
./build-key akranes  # "akranes" as Name, no password
./build-dh
/usr/sbin/openvpn --genkey --secret keys/ta.key

Configuring the server
On my server, a Linode VPS called hafnarfjordur.fmarier.org, I installed the openvpn package:
apt-get install openvpn
and then copied the following files from my high-entropy machine:
cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
chown root:root /etc/openvpn/*
chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key
Then I took the official configuration template:
cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
gunzip /etc/openvpn/server.conf.gz
and set the following in /etc/openvpn/server.conf:
dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 74.207.241.5"
push "dhcp-option DNS 74.207.242.5"
tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
(These DNS servers are the ones I found in /etc/resolv.conf on my Linode VPS.) Finally, I added the following to these configuration files:
  • /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    
  • /etc/rc.local (just before exit 0):
    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    
  • /etc/default/openvpn:
    AUTOSTART="all"
    
and ran sysctl -p before starting OpenVPN:
/etc/init.d/openvpn start
If the server has a firewall, you'll need to open up this port:
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

Configuring the client
The final piece of this solution is to set up my laptop, akranes, to connect to hafnarfjordur by installing the relevant Network Manager plugin:
apt-get install network-manager-openvpn-gnome
The laptop needs these files from the high-entropy machine:
cp ca.crt akranes.crt akranes.key ta.key /etc/openvpn/
chown root:francois /etc/openvpn/akranes.key /etc/openvpn/ta.key
chmod 640 /etc/openvpn/ta.key /etc/openvpn/akranes.key
and my own user needs to have read access to the secret keys. To create a new VPN, right-click on Network-Manager and add a new VPN connection of type "OpenVPN":
  • Gateway: hafnarfjordur.fmarier.org
  • Type: Certificates (TLS)
  • User Certificate: /etc/openvpn/akranes.crt
  • CA Certificate: /etc/openvpn/ca.crt
  • Private Key: /etc/openvpn/akranes.key
  • Available to all users: NO
then click the "Advanced" button and set the following:
  • General
    • Use LZO data compression: YES
  • Security
    • Cipher: AES-128-CBC
    • HMAC Authentication: Default
  • TLS Authentication
    • Subject Match: server
    • Verify peer (server) certificate usage signature: YES
    • Remote peer certificate TLS type: Server
    • Use additional TLS authentication: YES
    • Key File: /etc/openvpn/ta.key
    • Key Direction: 1

Debugging
If you run into problems, simply take a look at the logs while attempting to connect to the server:
tail -f /var/log/syslog
on both the server and the client. In my experience, searching for the error messages you find in there is usually enough to solve the problem.
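Beyond the logs, a couple of quick checks will confirm that the tunnel is actually up (10.8.0.1 being the server's address on the 10.8.0.0/24 subnet used in the masquerading rule above, which is OpenVPN's default):
ip addr show tun0    # the VPN interface should exist on both ends
ping -c 1 10.8.0.1   # the server should answer on the VPN subnet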

Next steps
The next thing I'm going to add to this VPN setup is a local unbound DNS resolver that will be offered to all clients. Is there anything else in your setup that I should consider adding to mine?
